Subject: Re: Scanline rendering in POV-Ray
From: Tom York
Date: 5 Jun 2003 05:10:02
Message: <web.3edf071ec1329458541c87100@news.povray.org>
Patrick Elliott wrote:
>But it will see the triangles that make them up, which takes lots of
>space.

I think the textures take up rather more. Games often minimise geometry when
they can get away with better texture maps.
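As a rough back-of-envelope comparison (my own numbers, purely
illustrative): a single 1024x1024 RGBA texture at 8 bits per channel is
4 MB uncompressed, while a 10,000-triangle mesh with shared vertices
(roughly 5,000 vertices at 32 bytes each plus 10,000 x 12 bytes of indices)
comes to well under half a megabyte. A handful of detail textures can
easily outweigh the geometry they are painted onto.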

>POV-Ray has a built in example of real time raytracing. It is small, but
>then you are dealing with an engine that is running on top of an OS and
>can't take full advantage of the hardware, since it will 'never' have
>100% total access to the processor. A card based one would likely be far more
>optimized, support possible speed improvements that don't exist in POV-Ray and
>have complete access to the full power of the chip running it. This would be
>slower why?

Neither will a software-only scanline renderer (I emphasize software only -
no card support) running under Windows, yet such renderers can work in
realtime at higher resolutions. Surely there is a raytracer optimised for
game-style geometry, rather than being the raytracing equivalent of
Lightwave? To be clear, I'm not claiming strong evidence against your
argument, but I have not seen a realtime raytracer do what a decent
realtime scanline/S-buffer based system can do. I'd really like to, mind.

>That is a point, but nothing prevents you from making explodable objects
>from triangles. In fact, the increase in available memory by using
>primitives in those things that are not going to undergo such a change
>means you can use even more triangles and make the explosion even more
>realistic. Current AGP technology is reaching its limits as to how much
>you can shove through the door and use. Short of a major redesign of both
>the cards and the motherboards, simply adding more memory or a faster
>chip isn't going to cut it.

Until it actually happens I remain unconvinced. We've supposedly been on
the edge of running out of bandwidth ever since the first 3D cards. Of
course, there is a limit, but I hear convincing speculation each way on
when we'll reach it. If a competitive realtime raytracer on a chip is
possible, why should it need the favour of the competition running out of
room for triangles? It should be able to do better at any time.
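For a rough sense of scale (my own assumptions, not figures from this
thread): AGP 8x moves about 2.1 GB/s, which is roughly 35 MB per frame at
60 frames per second. At, say, 32 bytes per vertex that is on the order of
a million vertices per frame even if every vertex were re-sent across the
bus every frame, and far more once static geometry stays resident in card
memory. Whether that counts as "reaching its limits" depends entirely on
what the scene demands.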

>Again. Why would anyone design one that 'only' supported such primitives?
>That's like asking why POV-Ray supports meshes if we all think primitives
>are so great. You use what is appropriate for the circumstances. If you
>want a space ship that explodes into a hundred fragments use a mesh, if
>you want one that gets sliced in half by a beam weapon, then use a mesh
>along the cut line and primitives where it makes sense. Duh!

Speed and complexity (hence cost). Current cards (at the high end for games)
can cost a couple of hundred dollars. Any simplification that can be made
can save cost, and having a card that doesn't need to switch over from
using triangles to drawing boxes halfway through a scene is good for
efficiency. Your particular example seems to require conversion between
triangles and other primitives (the spaceship is at one point a mesh, and
then it's half a mesh and half something else), which is not exactly rapid
in many cases. Aside from that, what's POV got to do with it? Cards don't
support NURBS as primitives either, despite them being popular in
non-realtime scanline renderers. I don't think a raytracer on a card
designed for realtime game use has to solve exactly the same problems as a
top-level raytracer for non-realtime use. In the same way that the Quake
engine and 3D Studio aren't kin.
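
Just to illustrate the sort of work that conversion implies (a minimal toy
sketch in C, entirely my own and nothing to do with any real card or
driver): turning even a simple sphere primitive into triangles means
generating a whole vertex set, and the amount of data grows quickly with
the tessellation level.

/* Toy UV-sphere tessellation: converting a sphere primitive into a
   triangle mesh.  Purely illustrative; a real engine would also build
   normals, texture coordinates, index buffers and then upload it all. */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

typedef struct { float x, y, z; } Vec3;

/* Fill (slices+1)*(stacks+1) vertices for a unit sphere. */
static Vec3 *tessellate_sphere(int slices, int stacks, int *out_count)
{
    Vec3 *v = malloc(sizeof(Vec3) * (size_t)(slices + 1) * (stacks + 1));
    int n = 0;
    for (int i = 0; i <= stacks; i++) {
        double phi = M_PI * i / stacks;                /* latitude   */
        for (int j = 0; j <= slices; j++) {
            double theta = 2.0 * M_PI * j / slices;    /* longitude  */
            v[n].x = (float)(sin(phi) * cos(theta));
            v[n].y = (float)cos(phi);
            v[n].z = (float)(sin(phi) * sin(theta));
            n++;
        }
    }
    *out_count = n;
    return v;
}

int main(void)
{
    int count;
    Vec3 *verts = tessellate_sphere(64, 32, &count);
    /* Each quad in the grid becomes two triangles. */
    printf("%d vertices, %d triangles, %lu bytes of vertex data\n",
           count, 64 * 32 * 2, (unsigned long)(count * sizeof(Vec3)));
    free(verts);
    return 0;
}

Even at 64x32 that is a few thousand triangles and some 25 KB of vertex
data for one smooth-looking sphere, and all of it has to be generated and
uploaded before the card can draw anything.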

>Well, that kind of describes most of the new cards that come out. lol
>Yes, it would need compatibility with the previous systems, but that
>isn't exactly an impossibility.

It is more difficult when you have completely changed the philosophy behind
the card, but want it to remain compatible with the previous philosophy.
You don't agree?

>There is nothing to prevent using a second chip dedicated to processing
>such things and having it drop the result into a block of memory to be
>used like a 'normal' bitmap. This assumes that the speed increase gained
>by building the rendering engine into the card wouldn't offset the time
>cost of the procedural texture. In any case, there are ways around this
>issue, especially if such methods turn out to already be in use on the
>newer DirectX cards.

Then aren't you going to lose the advantage of generating textures on the
card? Whether a bitmap is generated procedurally or painted by an artist
and loaded, it still has to be stored somewhere. Newer cards do procedural
shading on a pixel as it's rendered (or so I thought), so no extra storage
is required.
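
As a crude sketch of the difference (toy C of my own, not how any
particular card's shading hardware actually works): evaluating the
procedure per pixel at shading time needs no texture storage at all, while
baking the same procedure into a bitmap first costs a megabyte even for a
modest 1024x1024 single-channel map.

#include <stdio.h>
#include <stdlib.h>

/* A toy procedural texture: an 8x8 checker pattern over (u,v) in [0,1). */
/* In the "shader" model this runs once per shaded pixel; in the "baked" */
/* model it runs once per texel, up front, into a stored bitmap.         */
static unsigned char checker(float u, float v)
{
    int cu = (int)(u * 8.0f);
    int cv = (int)(v * 8.0f);
    return ((cu + cv) & 1) ? 255 : 0;
}

int main(void)
{
    const int w = 1024, h = 1024;

    /* Baked: evaluate the procedure into a bitmap before rendering. */
    unsigned char *bitmap = malloc((size_t)w * h);   /* 1 MB of storage */
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
            bitmap[y * w + x] = checker((float)x / w, (float)y / h);

    /* Per-pixel: nothing stored; the value is computed on demand when */
    /* a pixel is shaded.                                              */
    unsigned char sample = checker(0.37f, 0.62f);

    printf("baked texture: %d bytes, on-demand sample: %d\n", w * h, sample);
    free(bitmap);
    return 0;
}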

